
    Extracting Statistically Significant Behaviour from Fish Tracking Data With and Without Large Dataset Cleaning

    Extracting a statistically significant result from video of natural phenomena can be difficult for two reasons: (i) there can be considerable natural variation in the observed behaviour, and (ii) computer vision algorithms applied to natural phenomena may not perform correctly on a significant number of samples. This study presents an approach to cleaning a large, noisy visual tracking dataset so that statistically sound results can be extracted from the image data. In particular, analyses of 3.6 million underwater trajectories of a fish, together with the water temperature at the time of acquisition, are presented. Although there are many false detections and incorrect trajectory assignments, a combination of data binning and robust estimation methods yields reliable evidence for an increase in fish speed as water temperature rises. A method for data cleaning is then proposed that removes outliers arising from false detections and incorrect trajectory assignments using a deep learning-based clustering algorithm. The corresponding results likewise show a rise in fish speed as temperature goes up. Several statistical tests applied to both the cleaned and uncleaned data confirm that both results are statistically significant and show an increasing trend. However, the latter approach also produces a cleaner dataset suitable for further analysis.
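The binning-plus-robust-estimation idea described above can be sketched as follows. This is a minimal illustration with synthetic data, not the paper's actual dataset or pipeline: the temperature range, speed model, and outlier fractions are all invented for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-trajectory mean speeds with a mild temperature dependence,
# plus heavy-tailed outliers standing in for false detections and incorrect
# trajectory assignments.
temperature = rng.uniform(10, 25, size=10_000)            # degrees C
speed = 2.0 + 0.15 * temperature + rng.normal(0, 0.5, 10_000)
corrupted = rng.random(10_000) < 0.05                     # 5% bad samples
speed[corrupted] += rng.exponential(20, corrupted.sum())

# Bin by temperature and use a robust location estimate (the median) within
# each bin, so the outliers do not dominate the estimated trend.
bins = np.linspace(10, 25, 16)
idx = np.digitize(temperature, bins)
robust_speed = np.array([np.median(speed[idx == i])
                         for i in range(1, len(bins))])

# The median speed still increases with temperature despite the corruption.
print(robust_speed[0] < robust_speed[-1])
```

The mean within each bin would be dragged upward by the exponential outliers; the median recovers the underlying trend without any explicit outlier removal.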

    Analytics and Blockchain for Data Integrity in the Pharmaceuticals Industry

    The data quantity explosion witnessed during the last two decades has led industrial organizations to exploit this sheer amount of data for tasks that previously seemed impossible. However, larger data volumes bring with them a series of drawbacks that affect data quality and integrity, and this becomes most evident in government-supervised industrial settings. In many such cases, public authorities have defined sets of principles (e.g., ALCOA+) regarding data management that industrial organizations must abide by, and violation of these standards, whether deliberate or not, often carries severe legal consequences. In an effort to adhere as closely as possible to such principles, pharmaceutical companies invest heavily in resources to maintain high quality standards in their data, creating demand for automated methods for calculating, monitoring, and predicting compliance. Moreover, in complex manufacturing and production lines, data analytics provide means for real-time, continuous monitoring of large numbers of sensor variables and categorical or numerical values, from which higher-order conclusions can be derived and taken into account when business process optimizations are considered. In this work we present an easy-to-use integrated platform for real-time raw sensor data monitoring and pre-processing in pharmaceutical production lines, combining blockchain storage for data integrity with deep-learning capabilities for data analytics. Additionally, the platform can calculate, monitor, and predict compliance with the ALCOA+ set of principles, substantially reducing the time and effort needed to maintain and compute such complex parameters manually. © 2022 IEEE
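The data-integrity role of the blockchain layer can be sketched with a hash chain over sensor records. This is a minimal illustration of the general technique, assuming the platform's storage works along these lines; the record fields and chain format here are invented, not the paper's actual schema.

```python
import hashlib
import json

def append_record(chain, record):
    """Append a sensor record to a hash chain: each block commits to the
    previous block's hash, so any later tampering is detectable."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps(record, sort_keys=True)
    chain.append({
        "record": record,
        "prev_hash": prev_hash,
        "hash": hashlib.sha256((prev_hash + payload).encode()).hexdigest(),
    })
    return chain

def verify_chain(chain):
    """Recompute every hash; return False if any record was altered."""
    prev_hash = "0" * 64
    for block in chain:
        payload = json.dumps(block["record"], sort_keys=True)
        expected = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        if block["hash"] != expected or block["prev_hash"] != prev_hash:
            return False
        prev_hash = block["hash"]
    return True

chain = []
append_record(chain, {"sensor": "temp-01", "value": 21.4, "ts": "2022-01-01T00:00Z"})
append_record(chain, {"sensor": "temp-01", "value": 21.6, "ts": "2022-01-01T00:01Z"})
print(verify_chain(chain))            # chain intact
chain[0]["record"]["value"] = 99.9    # tamper with a stored value
print(verify_chain(chain))            # integrity violation detected
```

Tamper-evidence of this kind supports ALCOA+ expectations such as "original" and "accurate" records, since silent edits to historical sensor data break the chain.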

    Generative Adversarial Networks Conditioned By Brain Signals

    Recent advancements in generative adversarial networks (GANs) using deep convolutional models have supported the development of image generation techniques able to reach satisfactory levels of realism. Further improvements have been proposed to condition GANs to generate images matching a specific object category or a short text description. In this work, we build on the latter class of approaches and investigate the possibility of driving and conditioning the image generation process with brain signals recorded, through an electroencephalograph (EEG), while users look at images from a set of 40 ImageNet object categories, with the objective of generating the seen images. To accomplish this task, we first demonstrate that EEG brain activity signals encode visually related information that allows us to accurately discriminate between visual object categories, and accordingly we extract a more compact, class-dependent representation of the EEG data using recurrent neural networks. Afterwards, we use the learned EEG manifold to condition image generation with GANs, which, during inference, read EEG signals and convert them into images. We tested our generative approach using EEG signals recorded from six subjects while they looked at images of the aforementioned 40 visual classes. The results show that for classes represented by well-defined visual patterns (e.g., pandas, airplanes), the generated images are realistic and highly resemble those that evoked the EEG signals used for conditioning the GANs, resulting in an actual reading-the-mind process.
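The conditioning mechanism can be sketched as feeding the generator a noise vector concatenated with the EEG embedding, so the same noise yields different images for different brain signals. This is an illustrative toy, not the paper's architecture: the linear "generator", the dimensions, and the random embeddings are all stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

NOISE_DIM, EEG_DIM, IMG_PIXELS = 100, 128, 64 * 64

# Stand-in for a trained generator network: a fixed linear map from the
# conditioned latent vector to a flattened image in [-1, 1].
G = rng.standard_normal((IMG_PIXELS, NOISE_DIM + EEG_DIM)) * 0.01

def generate(noise, eeg_embedding):
    """Condition generation on an EEG embedding by concatenation."""
    latent = np.concatenate([noise, eeg_embedding])
    return np.tanh(G @ latent)

noise = rng.standard_normal(NOISE_DIM)
img_a = generate(noise, rng.standard_normal(EEG_DIM))  # EEG from one class
img_b = generate(noise, rng.standard_normal(EEG_DIM))  # EEG from another
print(img_a.shape, np.allclose(img_a, img_b))          # same noise, different images
```

In the paper the EEG embedding comes from the recurrent encoder rather than random draws, and the generator is a trained deep convolutional network; only the concatenation-based conditioning step is shown here.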

    Deep Learning Human Mind For Automated Visual Classification

    What if we could effectively read the mind and transfer human visual capabilities to computer vision methods? In this paper, we address this question by developing the first visual object classifier driven by human brain signals. In particular, we employ EEG data evoked by visual object stimuli, combined with recurrent neural networks (RNNs), to learn a discriminative brain activity manifold of visual categories in a reading-the-mind effort. Afterward, we transfer the learned capabilities to machines by training a convolutional neural network (CNN)-based regressor to project images onto the learned manifold, thus allowing machines to employ human brain-based features for automated visual classification. We use a 128-channel EEG with active electrodes to record the brain activity of several subjects while they look at images of 40 ImageNet object classes. The proposed RNN-based approach for discriminating object classes using brain signals reaches an average accuracy of about 83%, which greatly outperforms existing methods attempting to learn EEG visual object representations. As for automated object categorization, our human brain-driven approach obtains competitive performance, comparable to that achieved by powerful CNN models, and it is also able to generalize over different visual datasets.
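The first stage of the pipeline, summarizing a multichannel EEG sequence into a class-discriminative embedding with a recurrent network, can be sketched as below. The plain tanh recurrence, random untrained weights, and dimensions are illustrative assumptions; the paper's actual encoder and training procedure are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

N_CHANNELS, N_STEPS, HIDDEN, N_CLASSES = 128, 50, 64, 40

# Untrained stand-in weights for the recurrent encoder and the classifier head.
W_in = rng.standard_normal((HIDDEN, N_CHANNELS)) * 0.05
W_h = rng.standard_normal((HIDDEN, HIDDEN)) * 0.05
W_out = rng.standard_normal((N_CLASSES, HIDDEN)) * 0.05

def encode_eeg(eeg):
    """Run a simple tanh RNN over a (N_STEPS, N_CHANNELS) EEG recording and
    return the final hidden state as the manifold embedding."""
    h = np.zeros(HIDDEN)
    for x_t in eeg:
        h = np.tanh(W_in @ x_t + W_h @ h)
    return h

def classify(embedding):
    """Map an embedding to one of the 40 object classes."""
    return int(np.argmax(W_out @ embedding))

eeg = rng.standard_normal((N_STEPS, N_CHANNELS))   # fake 128-channel recording
emb = encode_eeg(eeg)
print(emb.shape, classify(emb))
```

The second stage, the CNN regressor that projects images onto the same embedding space, would be trained so that `encode_eeg` outputs and image projections coincide for matching stimuli; `classify` can then score either modality.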